
Prompt-based multi-task learning for robust text retrieval

Abstract

The exponential growth of digital information necessitates robust text retrieval methods, yet most existing methods are domain- or task-specific, which limits their applicability. Multi-task learning is a promising alternative, as it helps a model learn more meaningful embeddings; however, it requires a mechanism for separating tasks. Many studies explore multi-task learning to improve generalization but tend to focus on large models. In real-world speech analytics tasks that require searching through hundreds of millions of vectors in real time, smaller models are more appropriate. This paper presents a novel approach to enhancing the robustness of multi-task text retrieval models through the use of prompts. We use contrastive learning to train encoder models in both single-task and multi-task configurations, compare their performance, and analyze the efficiency of different prompting strategies, including hard prompts (explicit natural-language instructions) and soft prompts of varying lengths (trainable model special tokens). Experiments apply prompts either to both the query and the candidate document or to the query alone, keeping candidates prompt-free so that pre-encoded candidates can be reused in multi-task retrieval without significant quality loss. The results are compared using the R@1, R@5, and MRR metrics, which are most applicable for evaluating in-domain and out-of-domain search. Single-task models perform better on in-domain training data, while multi-task models perform better on out-of-domain data, highlighting their increased robustness to domain shifts. Applying prompts to both the query and the document yields better performance than applying them to the query alone. Soft prompts prove preferable to hard prompts, as they better adapt the model to different domains.
The findings of this study can be useful for improving text retrieval models, especially in multi-task scenarios where high adaptability and strong performance on new data are required. Trainable prompts can be an effective tool for enhancing the flexibility of models in various applications, such as information retrieval and question-answering systems.
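The setup described above, with task-specific soft prompts prepended to the query, optionally prompt-free candidate documents, and a contrastive training objective, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the encoder is replaced by mean pooling, and all names (`soft_prompts`, `encode`, `info_nce`, the task labels) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8          # embedding dimension (illustrative)
PROMPT_LEN = 4   # number of soft-prompt tokens per task (illustrative)

# One trainable soft prompt per task; in the described approach these would be
# special-token embeddings optimized jointly with the encoder.
soft_prompts = {
    "faq": rng.normal(size=(PROMPT_LEN, DIM)),
    "dialogue": rng.normal(size=(PROMPT_LEN, DIM)),
}

def encode(token_embs: np.ndarray, task: str, is_query: bool = True,
           prompt_query_only: bool = False) -> np.ndarray:
    """Mean-pool token embeddings into a normalized vector, optionally
    prepending the task's soft prompt.

    If prompt_query_only is True, candidate documents are encoded without a
    prompt, so their vectors can be pre-computed once and reused across tasks.
    """
    if is_query or not prompt_query_only:
        token_embs = np.concatenate([soft_prompts[task], token_embs], axis=0)
    pooled = token_embs.mean(axis=0)
    return pooled / np.linalg.norm(pooled)

def info_nce(query: np.ndarray, candidates: np.ndarray, pos_idx: int,
             temperature: float = 0.05) -> float:
    """Contrastive (InfoNCE) loss: pull the positive candidate toward the
    query, push the in-batch negatives away."""
    sims = candidates @ query / temperature
    sims -= sims.max()                       # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return float(-np.log(probs[pos_idx]))
```

Because prompt-free candidate vectors do not depend on the task, a single pre-encoded candidate index can serve every task, which is the practical motivation for the query-only prompting experiments.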
